60 - Recap Clip 10.3: Explanation-Based Learning [ID:30463]

We are integrating knowledge into learning, and the price for that, of course, is that we have to use logic, because that is the only real, by which I mean compositional, way of representing knowledge in learning algorithms.

We started out with what we call explanation-based learning, where you basically try to understand explanations, or worked examples. The idea here is that you get the explanation by trying to prove what you are seeing, using the background knowledge.

Our example has been trying to understand differentiation, in particular doing simple stuff like simplification; simplification was really our example. The idea here is that you generate an explanation by yourself that explains the observation, namely that, in this case, 1 * (0 + x) simplifies to something.
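To make this step concrete, here is a minimal sketch in Python; it is not the course's implementation. Expressions are nested tuples, the background knowledge is assumed to consist of just the two rewrite rules 1 * u → u and 0 + u → u, and simplifying the observation records the rule applications, which together play the role of the explanation.

```python
# A minimal sketch (not the course's code): expressions are nested tuples such
# as ('*', 1, ('+', 0, 'x')); the background knowledge consists of two rewrite
# rules, and simplifying an expression records the rule applications, which
# together form the explanation (the proof) of the observation.

REWRITES = [
    (('*', 1, 'u'), 'u'),   # 1 * u  rewrites to  u
    (('+', 0, 'u'), 'u'),   # 0 + u  rewrites to  u
]

def match(pattern, expr, subst):
    """Match a rule pattern against an expression; 'u' is the rule's variable."""
    if subst is None:
        return None
    if pattern == 'u':
        return {**subst, 'u': expr}
    if isinstance(pattern, tuple) and isinstance(expr, tuple) and len(pattern) == len(expr):
        for p, e in zip(pattern, expr):
            subst = match(p, e, subst)
        return subst
    return subst if pattern == expr else None

def instantiate(pattern, subst):
    """Plug the matched subexpression back into the rule's right-hand side."""
    if pattern == 'u':
        return subst['u']
    if isinstance(pattern, tuple):
        return tuple(instantiate(p, subst) for p in pattern)
    return pattern

def simplify(expr, proof):
    """Simplify an expression (only rewriting at the top level, which is enough
    for this example), recording every step as a node of the explanation."""
    for lhs, rhs in REWRITES:
        subst = match(lhs, expr, {})
        if subst is not None:
            reduced = instantiate(rhs, subst)
            proof.append(('Rewrite', expr, reduced))
            return simplify(reduced, proof)
    proof.append(('Primitive', expr))   # nothing left to rewrite
    return expr

proof = []
print(simplify(('*', 1, ('+', 0, 'x')), proof))   # 'x'
for step in proof:                                # the explanation of the observation
    print(step)
```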

Then you generate a proof, and then you try to generalize the explanation. You do that by just rerunning exactly the same proof, the same explanation, on a variabilized version of the observation. You have instantiations here, 1, 0, and so on, which in this proof are what you had in the example, but if you rerun the proof, some of these things actually get forced on you by instantiating the variables.

What you end up with, if you collect the leaves of the proof, is, in the usual way, that if you assume the leaves you obtain the observation by chaining through the proof. You get a formula like this, and now you can realize that part of it is actually true irrespective of, that is for all, the instantiations, just from the background knowledge, which tells you that you are left with this kind of a more general rule.

Okay, so that is the rule we have learned from that example, by generating our own explanation, rerunning it, and then pruning the tree.
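Continuing the toy representation above, a hedged sketch of the generalization step: we variabilize the observation (the '?'-prefixed names are my own convention, not the course's notation) and replay the same two rewrite steps. Unification then forces some variables back onto the constants from the example (1 and 0), while the leaf variable stays free, which is why collecting the leaves yields a more general rule.

```python
# A toy unifier for the tuple representation above; strings starting with '?'
# are variables (an assumption for this sketch, not the course's notation).
def unify(a, b, subst):
    if subst is None:
        return None
    a = subst.get(a, a) if isinstance(a, str) else a
    b = subst.get(b, b) if isinstance(b, str) else b
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith('?'):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith('?'):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
        return subst
    return None

# Variabilized observation: ?a * (?b + ?z) simplifies to something.
goal = ('*', '?a', ('+', '?b', '?z'))

# Replaying proof step 1 (the rule 1 * u -> u) forces ?a = 1:
s = unify(('*', 1, '?u'), goal, {})
print(s)   # binds ?a -> 1 and ?u -> ('+', '?b', '?z')

# Replaying proof step 2 (the rule 0 + u -> u) forces ?b = 0:
s = unify(('+', 0, '?u2'), s['?u'], s)
print(s)   # additionally binds ?b -> 0 and ?u2 -> '?z'

# The remaining leaf is Primitive(?z), and ?z is never forced to a constant.
# Collecting the leaves of the replayed proof therefore gives the more general
# rule:  Primitive(z)  =>  1 * (0 + z) simplifies to z.
```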

There are a couple of things we can do here. Instead of just collecting the leaves, we can collect any roots of subtrees; in this case, the premise is actually a much better choice because it is more general. That gives us different ways of learning new rules. You want to learn rules where the preconditions are what we call operational, namely you can check them easily, without a lot of inference, and you want to optimize the size of the proof that is left over, because those are actually the inference steps that remain.
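The choice of where to cut the proof tree can be sketched in the same toy setting. The (goal, children) representation, the string labels, and the use of the number of preconditions as the cost are my own stand-ins for the measures the lecture has in mind; the point is only that every horizontal cut through the tree is a candidate rule, and we keep one whose preconditions are operational.

```python
# A hedged sketch of choosing a cut through the proof tree (toy representation:
# a node is (goal, [children]); the goal strings and the cost function are my
# own stand-ins, not the course's definitions).
from itertools import product

def cuts(node):
    """Enumerate the horizontal cuts of a proof tree: each cut is a list of
    subtree roots whose conjunction suffices to conclude the root goal."""
    goal, children = node
    yield [goal]                                   # cut at this node itself
    if children:                                   # or cut inside the children
        for combo in product(*(list(cuts(c)) for c in children)):
            yield [g for part in combo for g in part]

def best_cut(tree, operational, cost):
    """Keep only cuts whose preconditions are all operational (cheap to check),
    then pick the cheapest one."""
    candidates = [c for c in cuts(tree) if all(operational(g) for g in c)]
    return min(candidates, key=cost) if candidates else None

# Toy proof tree for the simplification example (labels are illustrative only).
tree = ('Simplify(1*(0+z), z)',
        [('Rewrite(1*(0+z), 0+z)', []),
         ('Simplify(0+z, z)',
          [('Rewrite(0+z, z)', []),
           ('Primitive(z)', [])])])

# Assume Rewrite and Primitive goals are operational, Simplify goals are not.
operational = lambda g: g.startswith(('Rewrite', 'Primitive'))
print(best_cut(tree, operational, cost=len))   # here: the three leaves
```

With a different operationality predicate, a subtree root such as Simplify(0+z, z) would become an admissible precondition, which is the kind of more general choice mentioned above.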

Part of chapter: Recaps
Access: open access
Duration: 00:06:26 min
Recording date: 2021-03-30
Uploaded: 2021-03-31 11:26:57
Language: en-US

Recap: Explanation-Based Learning
The main video on this topic is chapter 10, clip 3.
